We explore unifying a neural segmenter with two-pass cascaded encoder ASR into a single model. A key challenge is allowing the segmenter (which runs in real-time, synchronously with the decoder) to finalize the 2nd pass (which runs 900 ms behind real-time) without introducing user-perceived latency or deletion errors during inference. We propose a design where the neural segmenter is integrated with the causal 1st pass decoder to emit an end-of-segment (EOS) signal in real-time. The EOS signal is then used to finalize the non-causal 2nd pass. We experiment with different ways to finalize the 2nd pass, and find that a novel dummy frame injection strategy allows for high-quality 2nd pass results and low finalization latency simultaneously. On a real-world long-form captioning task (YouTube), we achieve 2.4% relative WER and 140 ms EOS latency gains over a baseline VAD-based segmenter with the same cascaded encoder.
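A minimal sketch of the dummy-frame idea, assuming a hypothetical `noncausal_encoder` callable and a fixed right-context length; names and shapes are illustrative, not the paper's actual interface:

```python
import torch

# The non-causal 2nd-pass encoder needs `right_context` future frames before
# it can emit output for the current frame. When the segmenter fires EOS, no
# future audio exists yet, so we inject dummy (zero) frames to flush the
# encoder and finalize the segment immediately.

def finalize_segment(noncausal_encoder, buffered_frames, right_context=9):
    """buffered_frames: (time, feature_dim) causal-encoder outputs for the segment."""
    dummy = torch.zeros(right_context, buffered_frames.shape[-1])
    padded = torch.cat([buffered_frames, dummy], dim=0)
    # The encoder now has enough "future" context to produce outputs for
    # every real frame; outputs for the dummy frames are discarded.
    return noncausal_encoder(padded)[: buffered_frames.shape[0]]
```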
Text-only and semi-supervised training using unpaired text and audio-only data has recently become popular due to the wide availability of unlabeled text and speech. In this work, we propose incorporating text-only and semi-supervised training into an attention-based deliberation model. By incorporating text-only data through training the deliberation text encoder as a Bidirectional Encoder Representations from Transformers (BERT) model, and by using large-scale text-to-speech and audio-only utterances with a Joint Acoustic and Text Decoder (JATD) and semi-supervised training, we achieve 4%-12% WER reduction across various tasks compared to the baseline deliberation model. Compared to a state-of-the-art language model (LM) rescoring method, the deliberation model reduces Google Voice Search WER by 11%. We show that the deliberation model also achieves a positive human side-by-side evaluation against the state-of-the-art LM rescorer while keeping endpointer latency reasonable.
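A toy sketch of the deliberation setup described above, in which the decoder attends over both the acoustic encoding and a BERT-style text encoding of first-pass hypotheses; all dimensions and layer choices are assumptions for illustration:

```python
import torch
import torch.nn as nn

class DeliberationDecoder(nn.Module):
    """Attends jointly over acoustic encodings and a text encoding of
    first-pass hypotheses (e.g. from a BERT-style encoder)."""

    def __init__(self, d_model=256, vocab=4096):
        super().__init__()
        self.embed = nn.Embedding(vocab, d_model)
        self.acoustic_attn = nn.MultiheadAttention(d_model, 4, batch_first=True)
        self.text_attn = nn.MultiheadAttention(d_model, 4, batch_first=True)
        self.out = nn.Linear(2 * d_model, vocab)

    def forward(self, prev_tokens, acoustic_enc, hyp_text_enc):
        q = self.embed(prev_tokens)                      # (batch, len, d_model)
        a, _ = self.acoustic_attn(q, acoustic_enc, acoustic_enc)
        t, _ = self.text_attn(q, hyp_text_enc, hyp_text_enc)
        return self.out(torch.cat([a, t], dim=-1))       # logits over vocab
```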
Improving the performance of end-to-end ASR models on long utterances, ranging from minutes to hours, is an ongoing challenge in speech recognition. A common solution is to segment the audio in advance using a separate voice activity detector (VAD) that decides segment boundary locations purely on the basis of speech/non-speech acoustic information. VAD segmenters, however, may be sub-optimal for real-world speech where, for example, a complete sentence that should be kept as a whole may contain hesitations in the middle ("set an alarm for... 5 o'clock"). We propose to replace the VAD with an end-to-end ASR model capable of predicting segment boundaries in a streaming fashion, so that the segmentation decision is conditioned not only on better acoustic features but also on semantic features from the decoded text, with negligible extra computation. In experiments on real-world long-form audio (YouTube) up to 30 minutes in length, we demonstrate 8.5% relative WER improvement and a 250 ms reduction in median end-of-segment latency compared to a VAD segmenter baseline on a state-of-the-art Conformer RNN-T model.
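A minimal sketch of streaming end-of-segment prediction, assuming the model exposes a per-frame `step_fn` whose output vocabulary includes an EOS token; the token index and threshold are illustrative assumptions:

```python
EOS_ID = 0  # assumed index of the end-of-segment token in the output vocab

def stream_decode(step_fn, frames, eos_threshold=0.8):
    """step_fn(frame, state) -> (log_probs over vocab incl. EOS, new_state).
    Because the same model decodes text and predicts EOS, the boundary
    decision sees both acoustic and semantic (decoded-text) context."""
    state, boundaries = None, []
    for t, frame in enumerate(frames):
        log_probs, state = step_fn(frame, state)
        if log_probs.exp()[EOS_ID] > eos_threshold:
            boundaries.append(t)     # finalize the current segment here
            state = None             # reset decoder state for the next segment
    return boundaries
```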
Language models (LMs) significantly improve the recognition accuracy of end-to-end (E2E) models on words rarely seen during training, when used in either the shallow fusion or the rescoring setup. In this work, we introduce LMs into the discriminative training of hybrid autoregressive transducer (HAT) models, to mitigate the training-versus-inference gap in how LMs are used. For the shallow fusion setup, we use LMs during both hypothesis generation and loss computation; the LM-aware MWER-trained model achieves a 10% relative improvement over a model trained with standard MWER on voice search test sets containing rare words. For the rescoring setup, we learn a small neural module that produces fusion weights in a data-dependent manner. This model achieves the same WER as a conventionally MWER-trained model, but without the need to sweep fusion weights.
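A sketch of the two ingredients named above, shallow-fusion scoring and the MWER objective, under the assumption that fused hypothesis scores and word-error counts for an n-best list are available; the fusion weight is illustrative:

```python
import torch

def shallow_fusion_score(am_log_probs, lm_log_probs, lm_weight=0.3):
    """Fused per-token score, used both when generating n-best hypotheses
    and inside the loss, so that training matches inference."""
    return am_log_probs + lm_weight * lm_log_probs

def mwer_loss(hyp_scores, word_errors):
    """Minimum word error rate loss over an n-best list.
    hyp_scores: (nbest,) total fused log scores per hypothesis.
    word_errors: (nbest,) word error counts per hypothesis."""
    p = torch.softmax(hyp_scores, dim=0)             # renormalize over n-best
    relative = word_errors.float() - word_errors.float().mean()
    return (p * relative).sum()                      # expected relative errors
```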
In this paper, we propose a dynamic cascaded encoder automatic speech recognition (ASR) model that unifies models for different deployment scenarios. The model can also significantly reduce model size and power consumption without quality loss. Specifically, with the dynamic cascaded encoder model we explore three techniques to maximize the performance of each model size: 1) using a separate decoder for each sub-model while sharing the encoders; 2) using funnel-pooling to improve encoder efficiency; 3) balancing the sizes of the causal and non-causal encoders to improve quality and fit deployment constraints. Overall, the proposed large-medium model is 30% smaller and reduces power consumption by 33% compared to the baseline cascaded encoder model. The triple-size model that unifies the large, medium, and small models achieves a 37% total size reduction with minimal quality loss, while substantially reducing the engineering effort of maintaining separate models.
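A minimal sketch of funnel-pooling between encoder blocks, which subsamples the time axis so later layers run at a lower frame rate; the stride and pooling choice are assumptions for illustration:

```python
import torch.nn as nn

class FunnelPool(nn.Module):
    """Average-pools the time axis by `stride` between encoder blocks,
    cutting the frame rate (and hence compute) of all later layers."""
    def __init__(self, stride=2):
        super().__init__()
        self.pool = nn.AvgPool1d(kernel_size=stride, stride=stride)

    def forward(self, x):            # x: (batch, time, dim)
        return self.pool(x.transpose(1, 2)).transpose(1, 2)
```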
Previous fine-grained datasets mainly focus on classification and are often captured in a controlled setup, with the camera focusing on the objects. We introduce the first Fine-Grained Vehicle Detection (FGVD) dataset in the wild, captured from a moving camera mounted on a car. It contains 5502 scene images with 210 unique fine-grained labels of multiple vehicle types organized in a three-level hierarchy. While previous classification datasets also include makes of different kinds of cars, the FGVD dataset introduces new class labels for categorizing two-wheelers, autorickshaws, and trucks. The FGVD dataset is challenging as it has vehicles in complex traffic scenarios with intra-class and inter-class variations in type, scale, pose, occlusion, and lighting conditions. Current object detectors like YOLOv5 and Faster R-CNN perform poorly on our dataset due to a lack of hierarchical modeling. Along with providing baseline results for existing object detectors on the FGVD dataset, we also present the results of combining an existing detector with the recent Hierarchical Residual Network (HRN) classifier for the FGVD task. Finally, we show that FGVD vehicle images are the most challenging to classify among fine-grained datasets.
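A hypothetical sketch of the detector-plus-hierarchical-classifier combination: a stock YOLOv5 proposes vehicle boxes, and a stand-in `hrn_classifier` labels each crop at the three hierarchy levels. The classifier interface is an assumption, not a real package:

```python
import torch

# Real torch.hub entry point for YOLOv5; the HRN classifier is a stand-in.
detector = torch.hub.load('ultralytics/yolov5', 'yolov5s')

def fine_grained_detect(image, hrn_classifier):
    """image: HWC numpy array. Returns one hierarchical label per box."""
    results = detector(image)
    labels = []
    for x1, y1, x2, y2, conf, cls in results.xyxy[0].tolist():
        crop = image[int(y1):int(y2), int(x1):int(x2)]
        labels.append(hrn_classifier(crop))  # -> e.g. (type, make, model)
    return labels
```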
Long-term OCR services aim to provide high-quality output to their users at competitive costs. Upgrading the models is essential because of the complex data uploaded by the users. Service providers encourage users who provide data on which the OCR model fails by rewarding them based on data complexity, readability, and the available budget. Hitherto, OCR work has involved preparing models on standard datasets without considering the end-users. We propose a strategy of consistently upgrading an existing Handwritten Hindi OCR model three times on a dataset of 15 users, fixing a budget of 4 users for each iteration. For the first iteration, the model trains directly on the dataset from the first four users. For each subsequent iteration, all remaining users write a page each, which the service provider then analyzes to select the 4 (new) best users based on the quality of predictions on the human-readable words. The selected users write 23 more pages for upgrading the model. We upgrade the model with Curriculum Learning (CL) on the data available in the current iteration and compare it against the subset from previous iterations. The upgraded model is tested on a held-out set of one page each from all 23 users. We provide insights from our investigations into the effect of CL, user selection, and especially data from unseen writing styles. Our work can be used for long-term OCR services in crowd-sourcing scenarios, for both service providers and end users.
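A minimal sketch of a curriculum-learning schedule of the kind described, assuming a scalar `difficulty` scoring function; the actual ordering criterion used in the paper may differ:

```python
def curriculum_batches(samples, difficulty, stages=3):
    """Order training data easy-to-hard for the upgrade step.
    `difficulty` maps a sample to a scalar (e.g. a readability or
    word-rarity score); the scoring function itself is an assumption."""
    ordered = sorted(samples, key=difficulty)
    stage_len = max(1, len(ordered) // stages)
    for s in range(stages):
        # each stage re-includes all easier data seen so far
        yield ordered[: (s + 1) * stage_len]
```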
We introduce LaViLa, a new approach to learning video-language representations by leveraging Large Language Models (LLMs). We repurpose pre-trained LLMs to be conditioned on visual input, and finetune them to create automatic video narrators. Our auto-generated narrations offer a number of advantages, including dense coverage of long videos, better temporal synchronization of the visual information and text, and much higher diversity of text. The video-text embedding learned contrastively with these additional auto-generated narrations outperforms the previous state-of-the-art on multiple first-person and third-person video tasks, in both zero-shot and finetuned setups. Most notably, LaViLa obtains an absolute gain of 10.1% on the EGTEA classification benchmark and 5.9% on the Epic-Kitchens-100 multi-instance retrieval benchmark. Furthermore, LaViLa trained with only half the narrations from the Ego4D dataset outperforms baseline models trained on the full set, and shows positive scaling behavior with increasing pre-training data and model size.
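The contrastive objective behind such video-text embeddings is typically a symmetric InfoNCE loss over (clip, narration) pairs; a minimal sketch, with the temperature value as an assumption:

```python
import torch
import torch.nn.functional as F

def video_text_infonce(video_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE: each clip should match its own narration
    (and vice versa) against all other pairs in the batch."""
    v = F.normalize(video_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = v @ t.T / temperature
    targets = torch.arange(len(v), device=v.device)
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.T, targets)) / 2
```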
Damage to the inferior frontal gyrus (Broca's area) can cause agrammatic aphasia, wherein patients, although able to comprehend, lack the ability to form complete sentences. This inability leads to communication gaps which cause difficulties in their daily lives. Assistive devices can help mitigate these issues and enable patients to communicate effectively. However, due to a lack of large-scale studies of linguistic deficits in aphasia, research on such assistive technology is relatively limited. In this work, we present two contributions that aim to re-initiate research and development in this field. Firstly, we propose a model that uses linguistic features from small-scale studies on aphasia patients and generates large-scale datasets of synthetic aphasic utterances from grammatically correct datasets. We show that the mean length of utterance, the noun/verb ratio, and the simple/complex sentence ratio of our synthetic datasets correspond to the reported features of aphasic speech. Further, we demonstrate how the synthetic datasets may be utilized to develop assistive devices for aphasia patients: the pre-trained T5 transformer is fine-tuned on the generated dataset to suggest 5 corrected sentences given an aphasic utterance as input. We evaluate the efficacy of the T5 model using BLEU and cosine semantic similarity scores, obtaining affirming results with a BLEU score of 0.827/1.00 and a semantic similarity of 0.904/1.00. These results provide a strong foundation for the concept that a synthetic dataset based on small-scale studies of aphasia can be used to develop effective assistive technology.
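A sketch of the inference side, assuming a Hugging Face T5 checkpoint fine-tuned on the synthetic data; the checkpoint name and decoding settings are illustrative assumptions:

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

# Stand-in checkpoint; in practice this would be the model fine-tuned
# on the synthetic aphasic-utterance dataset.
tok = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

def suggest_corrections(aphasic_utterance, n=5):
    """Return n candidate corrected sentences via beam search."""
    inputs = tok(aphasic_utterance, return_tensors="pt")
    outputs = model.generate(**inputs, num_beams=n, num_return_sequences=n,
                             max_new_tokens=64)
    return [tok.decode(o, skip_special_tokens=True) for o in outputs]
```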
This work introduces the novel task of Source-free Multi-target Domain Adaptation and proposes an adaptation framework comprising \textbf{Co}nsistency with \textbf{N}uclear-Norm Maximization and \textbf{Mix}Up knowledge distillation (\textit{CoNMix}) as a solution to this problem. The main motive of this work is to solve Single and Multi-target Domain Adaptation (SMTDA) in the source-free paradigm, which enforces the constraint that labeled source data is not available during target adaptation due to various privacy-related restrictions on data sharing. The source-free approach leverages target pseudo labels, which can be noisy, to improve target adaptation. We introduce consistency between label-preserving augmentations and utilize pseudo label refinement methods to reduce noisy pseudo labels. Further, we propose a novel MixUp Knowledge Distillation (MKD) for better generalization on multiple target domains using various source-free STDA models. We also show that the Vision Transformer (VT) backbone gives better feature representation with improved domain transferability and class discriminability. Our proposed framework achieves state-of-the-art (SOTA) results in various paradigms of source-free STDA and MTDA settings on popular domain adaptation datasets like Office-Home, Office-Caltech, and DomainNet. Project Page: https://sites.google.com/view/conmix-vcl
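A minimal sketch of the nuclear-norm maximization term, which encourages predictions on unlabeled target batches to be both confident and diverse; minimizing the negative nuclear norm is the standard formulation, though CoNMix's exact weighting is not shown here:

```python
import torch

def nuclear_norm_loss(logits):
    """Batch nuclear-norm maximization: the nuclear norm of the batch
    prediction matrix grows when predictions are confident (low entropy)
    and spread across classes (high rank), so we minimize its negative."""
    probs = torch.softmax(logits, dim=1)      # (batch, num_classes)
    return -torch.linalg.matrix_norm(probs, ord='nuc')
```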